Exploiting Structure for Tractable Nonconvex Optimization
Authors
Abstract
MAP inference in continuous probabilistic models has largely been restricted to convex density functions in order to guarantee tractability of the underlying model: high-dimensional nonconvex optimization problems contain a combinatorial number of local minima, making them extremely challenging for convex optimization techniques. This choice has brought significant computational advantages, but at the cost of model expressivity. We present a novel approach to nonconvex optimization that overcomes this tradeoff by exploiting local structure in the objective function, greatly expanding the class of tractable, continuous probabilistic models. Our algorithm optimizes a subset of variables chosen such that, near the minimum, the remaining variables decompose into approximately independent subsets, and then recurses on those subsets. Finding the global minimum in this way is exponentially faster than using convex optimization with restarts.
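The abstract describes the algorithm only at a high level. As a rough illustration of the recursion it outlines, the sketch below assumes the objective is supplied as a sum of local terms over named variables, uses a simple most-connected-variable heuristic to choose which variable to condition on, and searches each conditioned variable over a user-supplied grid. These representational choices, the function names, and the toy objective are assumptions made for the example, not the authors' implementation.

import itertools


def interaction_components(terms, variables):
    # Connected components of the graph that links variables sharing a term.
    adjacency = {v: set() for v in variables}
    for scope, _ in terms:
        active = [v for v in scope if v in adjacency]
        for u, w in itertools.combinations(active, 2):
            adjacency[u].add(w)
            adjacency[w].add(u)
    seen, components = set(), []
    for v in variables:
        if v in seen:
            continue
        stack, component = [v], set()
        while stack:
            u = stack.pop()
            if u not in component:
                component.add(u)
                stack.extend(adjacency[u] - component)
        seen |= component
        components.append(sorted(component))
    return components


def minimize_recursive(terms, variables, assignment, grids):
    # terms: list of (scope, f) pairs with f(assignment) -> float; the objective is their sum.
    assignment = dict(assignment)
    if not variables:
        return sum(f(assignment) for _, f in terms), assignment
    # Condition on the variable appearing in the most terms, so that the
    # remaining variables tend to split into independent components.
    cut = max(variables, key=lambda v: sum(v in scope for scope, _ in terms))
    rest = [v for v in variables if v != cut]
    best_value, best_assignment = float("inf"), None
    for candidate in grids[cut]:
        assignment[cut] = candidate
        total, combined = 0.0, dict(assignment)
        for component in interaction_components(terms, rest):
            sub_terms = [(s, f) for s, f in terms if set(s) & set(component)]
            value, sub_assignment = minimize_recursive(
                sub_terms, component, assignment, grids)
            total += value
            combined.update(sub_assignment)
        # Terms involving none of the remaining variables are added exactly once here.
        total += sum(f(assignment) for s, f in terms if not set(s) & set(rest))
        if total < best_value:
            best_value, best_assignment = total, combined
    return best_value, best_assignment


# Toy objective: conditioning on x decouples y from z.
terms = [(("x",), lambda a: (a["x"] - 1.0) ** 2),
         (("x", "y"), lambda a: (a["x"] * a["y"]) ** 2),
         (("x", "z"), lambda a: (a["x"] - a["z"]) ** 2)]
grids = {v: [i / 10.0 for i in range(-20, 21)] for v in "xyz"}
print(minimize_recursive(terms, ["x", "y", "z"], {}, grids))   # -> (0.0, {'x': 1.0, 'y': 0.0, 'z': 1.0})

In this toy case, conditioning on x splits the remaining variables into the singletons {y} and {z}, so the search costs on the order of 41^2 objective-term evaluations rather than the 41^3 of a full grid over all three variables; this is the kind of decomposition behind the exponential-speedup claim in the abstract.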
Similar resources
An Efficient Neurodynamic Scheme for Solving a Class of Nonconvex Nonlinear Optimization Problems
By a p-power (or partial p-power) transformation, the Lagrangian function of a nonconvex optimization problem becomes locally convex. In this paper, we present a neural network based on an NCP function for solving the nonconvex optimization problem. An important feature of this neural network is the one-to-one correspondence between its equilibria and the KKT points of the nonconvex optimization ...
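The snippet above is truncated, so the following is only a generic illustration, not the paper's network: it builds the kind of KKT residual from the Fischer-Burmeister NCP function whose zeros a neurodynamic model of this type is designed to have as its equilibria, and checks that it vanishes at a hand-computed KKT point of a toy problem. The toy problem and all names below are assumptions made for the example.

import numpy as np


def fischer_burmeister(a, b):
    # phi(a, b) = 0  iff  a >= 0, b >= 0 and a * b = 0 (complementarity).
    return np.sqrt(a * a + b * b) - a - b


def kkt_residual(x, lam, grad_f, g, grad_g):
    # Residual of the KKT system of  min f(x)  subject to  g(x) <= 0 (vector-valued g).
    stationarity = grad_f(x) + grad_g(x).T @ lam           # gradient of the Lagrangian in x
    complementarity = fischer_burmeister(lam, -g(x))       # encodes lam >= 0, g(x) <= 0, lam * g(x) = 0
    return np.concatenate([stationarity, complementarity])


# Toy problem: min (x0 - 2)^2 + (x1 - 1)^2  s.t.  x0 + x1 - 2 <= 0,
# whose KKT point is x* = (1.5, 0.5) with multiplier lam* = 1.
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
g = lambda x: np.array([x[0] + x[1] - 2.0])
grad_g = lambda x: np.array([[1.0, 1.0]])

residual = kkt_residual(np.array([1.5, 0.5]), np.array([1.0]), grad_f, g, grad_g)
print(residual)   # ~[0. 0. 0.]: the KKT point is a zero of the residual, i.e. an equilibrium of such a model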
An effective optimization algorithm for locally nonconvex Lipschitz functions based on mollifier subgradients
A Tensor Analogy of Yuan's Theorem of the Alternative and Polynomial Optimization with Sign Structure
Yuan’s theorem of the alternative is an important theoretical tool in optimization, which provides a checkable certificate for the infeasibility of a strict inequality system involving two homogeneous quadratic functions. In this paper, we provide a tractable extension of Yuan’s theorem of the alternative to the symmetric tensor setting. As an application, we establish that the optimal value of...
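For reference, the classical matrix form of Yuan's theorem of the alternative that this snippet refers to, and that the paper extends to symmetric tensors, can be stated as follows (the normalization of the multipliers varies across sources). For symmetric matrices $A, B \in \mathbb{S}^n$, exactly one of the following holds:
\[
\text{(i)}\ \exists\, x \in \mathbb{R}^n:\ x^{\top} A x < 0 \ \text{and}\ x^{\top} B x < 0;
\qquad
\text{(ii)}\ \exists\, \lambda, \mu \ge 0,\ \lambda + \mu = 1:\ \lambda A + \mu B \succeq 0.
\]
Condition (ii) is the checkable certificate of infeasibility of the strict inequality system in (i).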
A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials
The minimax theorem for a convex-concave bifunction is a fundamental theorem in optimization and convex analysis, and has a lot of applications in economics. In the last two decades, a nonconvex extension of this minimax theorem has been well studied under various generalized convexity assumptions. In this note, by exploiting the hidden convexity (joint range convexity) of separable homogeneous...
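For reference, one standard form of the convex-concave minimax theorem mentioned above is the following (hypotheses vary across versions; this compact, continuous case is the simplest):
\[
\min_{x \in X} \max_{y \in Y} f(x, y) \;=\; \max_{y \in Y} \min_{x \in X} f(x, y),
\]
whenever $X \subset \mathbb{R}^n$ and $Y \subset \mathbb{R}^m$ are nonempty compact convex sets and $f : X \times Y \to \mathbb{R}$ is continuous, convex in $x$ for each fixed $y$, and concave in $y$ for each fixed $x$. The nonconvex extension in the note above relaxes this convexity via the hidden (joint range) convexity of separable homogeneous polynomials.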
Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization
Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.